This guide walks through a concrete, typical beNNch use case. While the README in the repository is written to be generally applicable, this guide is deliberately explicit and follows a single example end to end.
Open ./config/ and select the benchmark you want to execute. In the corresponding config file (here: hpc_benchmark_31_config.yaml) you can choose the parameters that define the benchmark, such as model and machine parameters. Note that lists are supported for exploring parameter spaces (here: see the list 1,2,4 in line 20 of the config file); a job is created for each list element, thus running a scaling experiment across nodes.
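For illustration, the relevant part of such a config file might look like the following sketch. The key names here are illustrative, not verbatim; consult the actual hpc_benchmark_31_config.yaml for the exact structure.

```yaml
# Illustrative sketch, not a verbatim copy of config/hpc_benchmark_31_config.yaml:
num_nodes: 1,2,4        # one job per list element -> scaling experiment across nodes
tasks_per_node: 8       # example machine parameter
model_time_sim: 1000.   # example model parameter
```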
To make the backend simulator available, beNNch relies on the environment module system. This means that beNNch expects the simulator to be loadable via the following command:
module load <simulator>/<version>/<variant>
which in our case is
module load nest-simulator/3.2-bm/default
See the section “First steps: configure your simulation” of the beNNch README on how to install a simulator such that it is module loadable using Builder.
Executing the benchmark file that corresponds to the config file we just edited (here: benchmarks/hpc_benchmark_31.yaml) submits the jobs to the scheduler and displays them in a table. The build job checks whether the underlying simulator (here: nest-simulator) is already installed and, if not, installs it using Builder. The bench jobs are the actual simulation runs; here, three runs are created because the config file specifies a list of three node counts.
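beNNch builds on JUBE, so executing the benchmark file typically amounts to a standard JUBE invocation along these lines (the exact call is given in the README; the paths and id below are examples):

```shell
# Submit the build and bench jobs defined in the benchmark file:
jube run benchmarks/hpc_benchmark_31.yaml

# Later, resume the dependent steps of an existing experiment, e.g. id 5:
jube continue ../benchmark_results/HPC --id 5
```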
This overview also displays the job id, which tells you where the results can be found.
The jobs are submitted with a dependency: the simulation jobs only run after the build job has finished.
Here you can see that JUBE creates a folder for the experiment, indexed with the id of the job: 000005 (the other folders are from different experiments). The default location is ../benchmark_results/HPC for the hpc_benchmark model, but this can be changed in benchmarks/hpc_benchmark_31.yaml under outpath.
Looking at the created files, we see folders for the individual jobs that constitute the experiment, containing the individual simulation results, as well as the summarized timing results in <hash>.csv and a first plot of the results in <hash>.png. Here, <hash> is a randomly generated UUID (created with uuidgen) that carries no meaning of its own.
By default, the results are stored in the following gin repository: https://gin.g-node.org/nest/beNNch-results.git. If you want to replace this with your own destination, follow the optional step in the initialization procedure:
git submodule set-url -- results <new_url>
Before running the analysis, we need to set the scaling_type (plotting either across threads or across nodes) and the outpath defined in benchmarks/hpc_benchmark_31.yaml. This information is typically the same across different experiments of the same model; therefore we store it in this file instead of providing it every time we run an analysis.
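As a sketch, the analysis configuration could then contain entries along these lines (the key names are illustrative; check the actual file for the exact spelling):

```yaml
# Illustrative analysis settings; key names may differ in the real file.
scaling_type: nodes                  # or "threads"
outpath: ../benchmark_results/HPC    # must match outpath in benchmarks/hpc_benchmark_31.yaml
```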
The analysis config makes our analysis command very simple: after changing to the results/ directory, we only need to provide the id of the experiment.
We see that the output file containing the summarized timing information is added to the results/ directory (here highlighted in light green).
Next, we need to sync the repository in order to add the new results.
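Since the results repository is managed with git-annex (as used by gin), syncing is typically done with:

```shell
# Push both the git history and the annexed result files to the remote:
git annex sync --content
```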
One of the key features of beNNch is its metadata tracking. This means we do not need to worry about what the hashes in the list of Step 10 mean; we can simply choose the results we want to inspect via their metadata. Technically, this is achieved through git annex view. In this example, we select benchmarks that were run with the nest-simulator backend and use the wildcard operator * to select runs whose simulator-version starts with 3.
In git annex views, the wildcard operator has an additional function: for each metadata key whose value contains *, it sorts the results into one folder per matching value. This is visible in the resulting folder hierarchy, where the folders 3.0-bm_Stages2020, 3.1, 3.1-bm and 3.2-bm are created. Inside these folders lie the respective benchmark results (see the output of the ls command). Note that this functionality works on arbitrarily many levels: if we added machine="*" to the list of key-value pairs, each version folder would contain subfolders for all existing machine names.
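Such a selection can be expressed as a git-annex view. A sketch of the command for this example (the metadata field names are taken from the text and may differ in detail):

```shell
# Create a view containing only runs with the nest-simulator backend
# whose simulator-version starts with 3; the "*" both filters and
# groups the results into one folder per matching version:
git annex view simulator=nest-simulator simulator-version="3*"
ls   # shows folders such as 3.1 and 3.2-bm, each holding matching results

# Leave the view again when done:
git annex vpop
```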
After selecting the benchmarks of interest, we can create a presentation of the results in a flip-book format. For this, we execute the command above.
Finally, we are able to flick through the various benchmark results.